VASP6 How-to

Last updated on 6-7-2023

This page covers installing and running VASP6 and is based on VASP’s official documentation.

Note

VASP, including VASP.6.4.1, is a licensed program. It can only be installed on servers whose owners hold a license for the program.

Note

Some parts of this document are based on Dr. Dung Vu’s work, especially the makefile.include.nvhpc_omp_acc file and the troubleshooting section.

Installation

We provide two installation methods, one for CPU and one for GPU. VASP6 ships several build configurations.

For the CPU installation we use makefile.include.intel_omp, and for the GPU installation we use makefile.include.nvhpc_omp_acc. The CPU installation uses Intel oneAPI and is similar to the VASP5 installation; no changes to the makefile.include file are needed. The GPU installation uses the NVIDIA HPC SDK.

Loading toolkits

Intel and NVIDIA provide apt-based installation. To avoid installing the toolkits multiple times, copy them into the user’s folder afterwards. Visit Intel oneAPI and NVIDIA HPC SDK for apt installation details.
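For example, if the toolkits were first installed system-wide via apt, they can be copied into the home folder like this (the source paths below are assumptions; adjust them to your installation):

```shell
# Hypothetical system-wide install locations; adjust to your setup.
mkdir -p ~/intel ~/opt/nvidia
cp -r /opt/intel/oneapi ~/intel/oneapi            # Intel oneAPI
cp -r /opt/nvidia/hpc_sdk ~/opt/nvidia/hpc_sdk    # NVIDIA HPC SDK
```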

VASP.6.4.1 installation

VASP.6.4.1 comes as a tarball. Assuming that you have vasp.6.4.1.tgz in your home folder, run the following commands in the terminal to extract it.

cd ~
tar -xvf vasp.6.4.1.tgz
cd vasp.6.4.1

Now, to build vasp_std, vasp_gam, and vasp_ncl, execute the following lines. You may replace all with std, gam, or ncl to build only the ones you need.

CPU-only installation

Load Intel’s oneAPI and run the following lines.

source ~/intel/oneapi/setvars.sh # this may depend on your installation method
cp arch/makefile.include.intel_omp ./makefile.include
make DEPS=1 -j1 all
# make -j<number of cores> all # use this to build with multiple cores
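After the build finishes, the executables should appear under bin/ in the build directory. A quick sanity check (the build path is the one assumed above):

```shell
# Check that each expected VASP executable exists and is runnable.
for exe in vasp_std vasp_gam vasp_ncl; do
  if [ -x ~/vasp.6.4.1/bin/"$exe" ]; then
    echo "$exe OK"
  else
    echo "$exe missing"
  fi
done
```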

GPU installation

First, copy the template makefile; you will then update a few lines in it.

cp arch/makefile.include.nvhpc_omp_acc ./makefile.include

Warning

Unlike the CPU-only installation, the GPU installation requires modifying the makefile.include file and installing FFTW3 and rsync.

# If the above fails, then NVROOT needs to be set manually.
# NVHPC depends on your NVIDIA HPC SDK installation; NVVERSION on its version.
# (Keep comments on their own lines: an inline comment after a make variable
# assignment leaves the trailing whitespace in the variable's value.)
NVHPC      ?= /home/jovyan/opt/nvidia/hpc_sdk
NVVERSION   = 23.5
NVROOT      = $(NVHPC)/Linux_x86_64/$(NVVERSION)

## Improves performance when using NV HPC-SDK >=21.11 and CUDA >11.2
OFLAG_IN   = -fast -Mwarperf
SOURCE_IN  := nonlr.o

# FFTW (mandatory); FFTW_ROOT depends on your FFTW installation
FFTW_ROOT  ?= /opt/conda
LLIBS      += -L$(FFTW_ROOT)/lib -lfftw3 -lfftw3_omp
INCS       += -I$(FFTW_ROOT)/include

Lastly, install FFTW3 using conda, and install rsync. (If you installed FFTW by a different method, update the FFTW_ROOT variable above accordingly.)

conda install --yes fftw && \
sudo apt update && sudo apt install --yes rsync
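You can confirm that the libraries the makefile expects are actually present (the /opt/conda prefix matches the FFTW_ROOT assumed above):

```shell
# Look for libfftw3* under the prefix that makefile.include points at.
FFTW_ROOT=/opt/conda
if ls "$FFTW_ROOT"/lib/libfftw3* >/dev/null 2>&1; then
  echo "FFTW found under $FFTW_ROOT"
else
  echo "FFTW not found; update FFTW_ROOT in makefile.include"
fi
```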

We are ready to build. First export the environment variables below so the compilers and runtime libraries are found; the SDK path and the CUDA/library version numbers depend on your installation.

export NVHPC_INSTALL_DIR=/home/jovyan/.local_sdk/opt/nvidia/hpc_sdk
export NVCOMPILERS=$NVHPC_INSTALL_DIR
export NVARCH=`uname -s`_`uname -m`
export NVHPC_VERSION=2023
export NVROOT=$NVCOMPILERS/$NVARCH/$NVHPC_VERSION

export MANPATH=$MANPATH:$NVROOT/compilers/man
export PATH=$NVROOT/compilers/bin:$PATH
export PATH=$NVROOT/comm_libs/mpi/bin:$PATH

export LD_LIBRARY_PATH=$NVROOT/compilers/extras/qd/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NVROOT/compilers/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NVROOT/comm_libs/11.0/nccl/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NVROOT/comm_libs/mpi/lib/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NVROOT/cuda/11.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NVROOT/math_libs/11.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NVROOT/math_libs/11.8/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NVROOT/math_libs/12.1/targets/x86_64-linux/lib:$LD_LIBRARY_PATH

make DEPS=1 -j1 all
# make -j<number of cores> all # use this to build with multiple cores

Running VASP on JupyterHub

CPU-only job with Intel’s oneAPI

First, load Intel’s oneAPI. Here is some sample usage.

  • mpiexec -n numprocs vasp

  • mpiexec -n numprocs -genv OMP_NUM_THREADS=numthreads vasp

<vasp>

VASP binary file.

-n {numprocs}

The number of processes. E.g., -n 16 will run VASP with 16 processes.

-genv OMP_NUM_THREADS={numthreads}

Sets the number of OpenMP threads per process. The total number of cores used follows the formula numprocs x numthreads.
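As a sketch of the formula, a 16-core node could be split into 4 ranks with 4 threads each (the numbers here are only an example):

```shell
numprocs=4
numthreads=4
echo $((numprocs * numthreads))   # total cores used: prints 16
# mpiexec -n 4 -genv OMP_NUM_THREADS=4 vasp_std   # the corresponding launch line
```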

See also

Check out VASP’s Documentation https://www.vasp.at/wiki/index.php/Combining_MPI_and_OpenMP for details.

Warning

The default OMP_NUM_THREADS may not be 1, which can lead to using more cores than intended. The first few lines of VASP’s output show the number of threads in use.

running   24 mpi-ranks, with    4 threads/rank, on    1 nodes
distrk:  each k-point on   24 cores,    1 groups
distr:  one band on    1 cores,   24 groups
vasp.6.4.1 05Apr23 (build Jun 03 2023 03:01:26) complex
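To avoid relying on the default, you can pin the thread count explicitly before launching (the launch line is illustrative):

```shell
# Pin the thread count so the runtime default cannot surprise you.
export OMP_NUM_THREADS=1
echo "$OMP_NUM_THREADS"          # prints 1
# mpiexec -n 24 vasp_std         # now runs 24 ranks with 1 thread each
```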

NVIDIA GPU job with NVIDIA’s HPC SDK

First, load the NVIDIA HPC SDK. Here is some sample usage.

  • mpiexec -n numprocs vasp

  • mpiexec -n numprocs -x OMP_NUM_THREADS=numthreads vasp

<vasp>

VASP binary file.

-n {numprocs}

The number of GPUs. E.g., -n 2 will run VASP with 2 processes, one per GPU.

Caution

VASP recommends one MPI process per GPU.

-x OMP_NUM_THREADS={numthreads}

Sets the number of OpenMP threads per process. The total number of cores used follows the formula numprocs x numthreads.
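For example, on a node with two GPUs, a launch could look like this (the GPU count and thread count are assumptions; check your node first):

```shell
nvidia-smi --list-gpus                       # confirm how many GPUs are visible
mpiexec -n 2 -x OMP_NUM_THREADS=8 vasp_std   # one rank per GPU, 8 threads each
```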

See also

Check out VASP’s Documentation https://www.vasp.at/wiki/index.php/Combining_MPI_and_OpenMP for details.

Warning

The default OMP_NUM_THREADS may not be 1, which can lead to using more cores than intended. The first few lines of VASP’s output show the number of threads in use.

running   24 mpi-ranks, with    4 threads/rank, on    1 nodes
distrk:  each k-point on   24 cores,    1 groups
distr:  one band on    1 cores,   24 groups
vasp.6.4.1 05Apr23 (build Jun 03 2023 03:01:26) complex

Troubleshooting

Note

  1. forrtl: severe (174): SIGSEGV, segmentation fault occurred: run the following command before you start vasp.

    ulimit -s unlimited
    
  2. Accelerator Fatal Error: call to cuMemAlloc returned error 2: Out of memory: the GPU ran out of memory. Reduce the size of the job or use a GPU with more memory.

  3. mpirun noticed that process rank 0 with PID 0 on node jupyter-vasp6-2dnv-2damd exited on signal 4 (Illegal instruction): the binary was built for a different CPU vendor (Intel vs. AMD). Use vasp6_std_nv_amd on AMD CPUs (run lscpu | grep "Vendor ID" to check the CPU vendor).

  4. FIO/stdio: Disk quota exceeded: the storage quota needs to be increased. Contact the admin.

  5. Line-ending mismatch between Windows and Linux: run dos2unix <filename>

Miscellaneous

When running VASP on JupyterHub, it may be useful to share your workspace with collaborators and to monitor system resources. The following command installs both packages.

pip install --user jupyterlab-link-share jupyter-resource-usage